Supervised classification
CoMViT: An Efficient Vision Backbone for Supervised Classification in Medical Imaging
Aon Safdar, Mohamed Saadeldin
Vision Transformers (ViTs) have demonstrated strong potential in medical imaging; however, their high computational demands and tendency to overfit on small datasets limit their applicability in real-world clinical scenarios. In this paper, we present CoMViT, a compact and generalizable Vision Transformer architecture optimized for resource-constrained medical image analysis. CoMViT integrates a convolutional tokenizer, diagonal masking, dynamic temperature scaling, and pooling-based sequence aggregation to improve performance and generalization. Through systematic architectural optimization, CoMViT achieves robust performance across twelve MedMNIST datasets while maintaining a lightweight design with only ~4.5M parameters. It matches or outperforms deeper CNN and ViT variants, offering up to 5-20x parameter reduction without sacrificing accuracy. Qualitative Grad-CAM analyses show that CoMViT consistently attends to clinically relevant regions despite its compact size. These results highlight the potential of principled ViT redesign for developing efficient and interpretable models in low-resource medical imaging settings.
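Two of the components named above, diagonal masking and dynamic temperature scaling, can be illustrated directly on raw attention scores. The sketch below is a toy plain-Python version, not the authors' implementation; the function names and the fixed scalar temperature are illustrative assumptions.

```python
import math

def softmax(row):
    """Numerically stable softmax over a list of scores."""
    m = max(row)
    exps = [math.exp(s - m) for s in row]
    total = sum(exps)
    return [e / total for e in exps]

def masked_attention(scores, temperature=1.0):
    """Apply diagonal masking and temperature scaling to a square
    matrix of raw attention scores, then softmax each row.

    Diagonal masking sets each token's self-attention score to -inf,
    so attention is distributed only over the *other* tokens;
    dividing the remaining scores by a temperature sharpens (T < 1)
    or flattens (T > 1) the resulting distribution."""
    n = len(scores)
    out = []
    for i in range(n):
        row = [scores[i][j] / temperature if j != i else float("-inf")
               for j in range(n)]
        out.append(softmax(row))
    return out
```

In the real model the temperature would be learned (hence "dynamic"); here it is a fixed argument purely to show its effect on the softmax.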
Replacing supervised classification learning by Slow Feature Analysis in spiking neural networks
Many models for computations in recurrent networks of neurons assume that the network state moves from some initial state to some fixed-point attractor or limit cycle that represents the output of the computation. However, experimental data show that in response to a sensory stimulus the network state moves from its initial state through a trajectory of network states and eventually returns to the initial state, without reaching an attractor or limit cycle in between. This type of network response, where salient information about external stimuli is encoded in characteristic trajectories of continuously varying network states, raises the question of how a neural system could compute with such a code and arrive, for example, at a temporally stable classification of the external stimulus. We show that a known unsupervised learning algorithm, Slow Feature Analysis (SFA), could be an important ingredient for extracting stable information from these network trajectories. In fact, if sensory stimuli are more often followed by another stimulus from the same class than by a stimulus from another class, SFA approaches the classification capability of Fisher's Linear Discriminant (FLD), a powerful algorithm for supervised learning. We apply this principle to simulated cortical microcircuits and show that it enables readout neurons to learn discrimination of spoken digits and detection of repeating firing patterns within a stream of spike trains with the same firing statistics, without requiring any supervision for learning.
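The slowness objective at the heart of SFA can be stated in a few lines: among candidate features, prefer the one whose normalized mean squared temporal derivative is smallest. The sketch below isolates just that objective (full SFA also whitens the input and solves an eigenvalue problem); the `slowness` function and the toy signals are illustrative.

```python
import math

def slowness(signal):
    """SFA's objective for a single feature: the mean squared
    temporal derivative of the signal, normalized by its variance
    so that rescaling a feature does not change its score.
    Lower values mean slower, more temporally stable features."""
    n = len(signal)
    mean = sum(signal) / n
    var = sum((x - mean) ** 2 for x in signal) / n
    deriv = sum((signal[t + 1] - signal[t]) ** 2 for t in range(n - 1)) / (n - 1)
    return deriv / var

# A slowly drifting feature versus a fast oscillation of equal amplitude:
slow = [math.sin(2 * math.pi * t / 200) for t in range(200)]
fast = [math.sin(2 * math.pi * t / 4) for t in range(200)]
```

A readout trained to be slow in this sense suppresses the fast, stimulus-internal dynamics and keeps what is stable across the trajectory, which is exactly what a temporally stable class label requires.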
Cross-Entropy Is All You Need To Invert the Data Generating Process
Patrik Reizinger, Alice Bizeul, Attila Juhos, Julia E. Vogt, Randall Balestriero, Wieland Brendel, David Klindt
Supervised learning has become a cornerstone of modern machine learning, yet a comprehensive theory explaining its effectiveness remains elusive. Empirical phenomena, such as neural analogy-making and the linear representation hypothesis, suggest that supervised models can learn interpretable factors of variation in a linear fashion. Recent advances in self-supervised learning, particularly nonlinear Independent Component Analysis, have shown that these methods can recover latent structures by inverting the data generating process. We extend these identifiability results to parametric instance discrimination, then show how insights transfer to the ubiquitous setting of supervised learning with cross-entropy minimization. We prove that even in standard classification tasks, models learn representations of ground-truth factors of variation up to a linear transformation. We corroborate our theoretical contribution with a series of empirical studies. First, using simulated data matching our theoretical assumptions, we demonstrate successful disentanglement of latent factors. Second, we show that on DisLib, a widely-used disentanglement benchmark, simple classification tasks recover latent structures up to linear transformations. Finally, we reveal that models trained on ImageNet encode representations that permit linear decoding of proxy factors of variation. Together, our theoretical findings and experiments offer a compelling explanation for recent observations of linear representations, such as superposition in neural networks. This work takes a significant step toward a cohesive theory that accounts for the unreasonable effectiveness of supervised deep learning.
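The paper's central claim, recovery of factors of variation up to a linear transformation, implies that a linear probe should be able to read the ground-truth factor back out of the learned representation. A minimal one-dimensional sketch of that check, assuming (purely for illustration) a representation that is an exact affine image of the latent factor:

```python
import random

def linear_fit(x, y):
    """Ordinary least squares for y ≈ w*x + c in one dimension."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    var = sum((a - mx) ** 2 for a in x) / n
    w = cov / var
    return w, my - w * mx

random.seed(0)
z = [random.gauss(0.0, 1.0) for _ in range(500)]  # ground-truth latent factor
h = [3.0 * zi - 1.0 for zi in z]                  # representation: unknown affine map of z
w, c = linear_fit(h, z)                           # linear probe inverts the map
recovered = [w * hi + c for hi in h]
```

In the paper's experiments the representation comes from a trained classifier and the decoding is only approximately linear; this sketch shows the logic of the probe, not the experiment itself.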
Scope Loss for Imbalanced Classification and RL Exploration
Hasham Burhani, Xiao Qi Shi, Jonathan Jaegerman, Daniel Balicki
We demonstrate an equivalence between the reinforcement learning problem and the supervised classification problem. We consequently equate the exploration-exploitation trade-off in reinforcement learning to the dataset imbalance problem in supervised classification, and find similarities in how they are addressed. From our analysis of these problems we derive a novel loss function for reinforcement learning and supervised classification. Scope Loss, our new loss function, adjusts gradients to prevent performance losses from over-exploitation and dataset imbalance, without the need for any tuning. We test Scope Loss against SOTA loss functions over a basket of benchmark reinforcement learning tasks and a skewed classification dataset, and show that Scope Loss outperforms the other loss functions.
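The abstract does not give Scope Loss's formula, so no sketch of it is attempted here. For context, a well-known loss function for imbalanced classification (representative of the kind of baseline such comparisons use) is focal loss, which down-weights already well-classified examples so that gradients concentrate on hard, typically minority-class, examples:

```python
import math

def focal_loss(p_true, gamma=2.0):
    """Focal loss for one example: -(1 - p)^gamma * log(p),
    where p is the model's probability for the true class.
    The (1 - p)^gamma factor shrinks the loss of confident,
    correct predictions (p near 1), so easy majority-class
    examples contribute little. gamma = 0 recovers plain
    cross-entropy."""
    return -((1.0 - p_true) ** gamma) * math.log(p_true)
```

This is a point of comparison only; Scope Loss itself is a different construction, defined in the paper.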
Revisiting Wright: Improving supervised classification of rat ultrasonic vocalisations using synthetic training data
K. Jack Scott, Lucinda J. Speers, David K. Bilkey
Rodents communicate through ultrasonic vocalisations (USVs). These calls are of interest because they provide insight into the development and function of vocal communication, and may prove useful as a biomarker for dysfunction in models of neurodevelopmental disorders. Rodent USVs can be categorised into different components, and while manual classification is time-consuming, advances in neural computing have allowed for fast and accurate identification and classification. Here, we adapt a convolutional neural network (CNN), VocalMat, created for analysing mouse USVs, for use with rats. We codify a modified classification schema, adapted from that previously proposed by Wright et al. (2010), and compare the performance of our adaptation of VocalMat with a benchmark CNN, DeepSqueak. Additionally, we test the effect of inserting synthetic USVs into the training data of our classification network in order to reduce the workload involved in generating a training set. Our results show that the modified VocalMat outperformed the benchmark software on measures of both call identification and classification. Additionally, we found that augmenting the training data with synthetic images resulted in a marked improvement in the accuracy of VocalMat when it was subsequently used to analyse novel data. The resulting accuracy on the modified Wright categorisations was sufficiently high to allow for the application of this software to rat USV classification in laboratory conditions. Our findings also show that inserting synthetic USV calls into the training set leads to improvements in accuracy with little extra time cost.
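The synthetic-USV idea can be pictured as painting a frequency-modulated ridge onto a noisy spectrogram-like grid. The generator below is hypothetical and invented for illustration; the paper's actual synthesis procedure is not described in the abstract, and every parameter here is an assumption.

```python
import math
import random

def synthetic_usv(height=64, width=128, f0=40, sweep=10, seed=0):
    """Paint a hypothetical frequency-modulated 'call' onto a noisy
    spectrogram-like grid (rows = frequency bins, cols = time bins).
    The call is a bright Gaussian ridge sweeping linearly from
    frequency bin f0 to f0 + sweep over the call's duration."""
    rng = random.Random(seed)
    spec = [[rng.gauss(0.0, 0.1) for _ in range(width)] for _ in range(height)]
    start, end = width // 4, 3 * width // 4  # call occupies the middle half
    for t in range(start, end):
        centre = f0 + sweep * (t - start) / (end - start)
        for f in range(height):
            spec[f][t] += math.exp(-((f - centre) ** 2) / 2.0)  # Gaussian ridge
    return spec
```

Images like these, mixed into the real training set, are what lets the network see many more call shapes than were manually labelled.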
N-Student Learning
Overfitting is a fundamental problem in machine learning, and it is especially important when training with noisy data. As we scale our datasets, the amount of noise naturally increases because careful human labeling becomes infeasible. This article introduces the main ideas behind N-Student Learning, a multi-network training setup that can be applied to any network to reduce the impact of overfitting. The setup also allows precise control over how a network learns to model noise or uncertainty in the data.
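One common mechanism for multi-network noise handling, in the spirit of co-teaching, is to trust a training label only when enough of the peer networks independently predict it. The exact N-Student mechanism is not specified in this excerpt, so the filter below is an assumed, simplified reading, with invented names:

```python
def agreement_filter(labels, student_preds, min_agree=2):
    """Keep the indices of samples whose (possibly noisy) label is
    predicted by at least `min_agree` of the N student models.

    `labels` is the list of dataset labels; `student_preds` is a
    list of N prediction lists, one per student, aligned with
    `labels`. Samples failing the vote are treated as likely label
    noise and excluded from the next training round."""
    kept = []
    for i, y in enumerate(labels):
        votes = sum(1 for preds in student_preds if preds[i] == y)
        if votes >= min_agree:
            kept.append(i)
    return kept
```

Raising `min_agree` makes the filter stricter, which is one way such a setup gives control over how aggressively noisy labels are discarded.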
The Riemann Hypothesis in One Picture - DataScienceCentral.com
I wrote this article for machine learning and analytics professionals in general. In it, I describe a new visual, simple, intuitive method for supervised classification that involves synthetic data and explainable AI. At the same time, I describe the Riemann Hypothesis (RH) in layman's terms, and I offer a new perspective for those who attempt to solve the most famous unsolved mathematical problem of all time.